add QoS config for DBKvs parallel operations #6773
Conversation
 */
@Value.Default
public int poolQosSize() {
    return 0;
If we're okay explicitly changing behavior, we could probably make this default to poolSize() instead of being hard-coded at 0, as that is probably a more reasonable default than being unlimited.
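For illustration, the suggested alternative would look roughly like this (a sketch only, not what this PR merged; it assumes the accessor sits next to the existing poolSize() getter on DdlConfig):

// Hypothetical sketch of the reviewer's suggestion: default the QoS cap to the
// pool size rather than leaving it unlimited.
@Value.Default
public int poolQosSize() {
    return poolSize();
}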
👍 Thanks a lot for the contribution! Reads very well too :)
Released 0.957.0
General
Before this PR:
See internal ticket PDS-420698
In DBKvs, the poolSize in DdlConfig is used to create an ExecutorService with a fixed number of threads. This executor is used for multiput, truncateTables, and (for postgres only) most types of reads.
This can cause a problem when one thread performs a very large transaction: that single thread can submit more tasks to the executor than it has threads, and then all other threads that want to use these operations make no progress because their tasks sit later in the queue.
In practice we have seen many cases where all operations except the single long-running transaction are blocked for 30 minutes or longer.
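To make the failure mode concrete, here is a minimal standalone sketch using plain JDK classes (not AtlasDB code; the pool size of 4 and the task counts are invented for illustration) of how one large caller starves everyone else sharing a fixed-size executor:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// One shared fixed-size pool, analogous to the executor DBKvs builds from poolSize.
// A single "large transaction" floods the queue, so a later small operation
// has to wait behind all of it.
public final class SharedPoolStarvationDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // The large transaction: many more tasks than threads, each fairly slow.
        for (int i = 0; i < 1_000; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // A small operation from another caller: it cannot start until the
        // 1,000 queued tasks ahead of it have drained (roughly 25 seconds here).
        long start = System.nanoTime();
        pool.submit(() -> {}).get();
        System.out.printf("small task waited %d ms%n",
                TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start));

        pool.shutdownNow();
    }
}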
After this PR:
This change adds a new optional poolQosSize config in DdlConfig which sets a maximum concurrency for individual parallel tasks. The default value is 0, which means there is no enforced limit, so no behavior changes for applications that have not overridden this config.
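The intended behavior can be sketched as follows (illustrative only, not the actual DBKvs implementation; the QosBoundedSubmitter class and its shape are invented): a per-operation cap on how many of that operation's tasks occupy the shared pool at once, so a single large operation can no longer monopolize every thread.

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;
import java.util.stream.Collectors;

// Illustrative only: a semaphore bounds how many of one operation's tasks are in
// the shared pool at a time, leaving threads free for other callers.
// poolQosSize <= 0 is treated as "no cap", matching the default of 0.
public final class QosBoundedSubmitter {
    private final ExecutorService sharedPool;
    private final int poolQosSize;

    QosBoundedSubmitter(ExecutorService sharedPool, int poolQosSize) {
        this.sharedPool = sharedPool;
        this.poolQosSize = poolQosSize;
    }

    <T> List<Future<T>> submitAll(List<Callable<T>> tasks) {
        Semaphore permits =
                new Semaphore(poolQosSize > 0 ? poolQosSize : Integer.MAX_VALUE);
        return tasks.stream()
                .map(task -> {
                    // Block the submitting caller before enqueueing more of its work.
                    permits.acquireUninterruptibly();
                    return sharedPool.submit(() -> {
                        try {
                            return task.call();
                        } finally {
                            permits.release();
                        }
                    });
                })
                .collect(Collectors.toList());
    }
}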
==COMMIT_MSG==
Adds optional poolQosSize config to DBKvs DdlConfig which controls the max concurrency of individual parallel operations such as multiput, truncateTables, and read operations on postgres. The default value is no limit, so behavior is not changing.
==COMMIT_MSG==
Priority:
Concerns / possible downsides (what feedback would you like?):
Is documentation needed?:
Compatibility
Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?:
No breaks
Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?:
No change
The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.):
Yes (safe)
Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?:
No change in dependencies
Does this PR need a schema migration?
No
Testing and Correctness
What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?:
What was existing testing like? What have you done to improve it?:
If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.:
If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?:
Execution
How would I tell this PR works in production? (Metrics, logs, etc.):
Has the safety of all log arguments been decided correctly?:
Will this change significantly affect our spending on metrics or logs?:
How would I tell that this PR does not work in production? (monitors, etc.):
If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?:
If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):
Scale
Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.:
Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?:
Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?:
Development Process
Where should we start reviewing?:
If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?:
Please tag any other people who should be aware of this PR:
@jeremyk-91
@sverma30
@raiju